Jeffrey Mark Siskind, associate professor in Purdue’s School of Electrical and Computer Engineering, poses with Darth Vader, a language-learning robot that has shown the ability to create its own path of movement based on numerous previous experiences. (Purdue Research Foundation photo/Curt Slyder)

For about a decade, the U.S. Army used robots to search for improvised explosive devices on the battlegrounds of Iraq. But those robots were really just rugged remote-control cars that had to be driven by a human soldier, who in turn had to be protected by another soldier.

One Army officer saw those operations and came up with another idea: a language-learning robot. That soldier, Scott Bronikowski, eventually earned his doctorate in electrical and computer engineering from Purdue while pursuing his vision of a robot that learns the meanings of words and acts on them.

“We do a lot with robots in the Army,” said Bronikowski, in a statement released by the school. “I thought if we had a robot that could be controlled by voice it would make things much easier and safer. I was thinking something like R2D2 from ‘Star Wars.’”

Bronikowski earned his doctorate last year, but the team of scientists at Purdue’s School of Electrical and Computer Engineering is still working to make the language-learning robot a practical reality.

“It’s our hope that this technology can be applied to a host of applications in the future, potentially including autonomous vehicles,” said Jeffrey Mark Siskind, the associate professor leading the work.

Three algorithms allow the robot to learn the meanings of words from sentences that describe its route. Using those learned meanings, the robot can generate its own sentences to describe a path it has driven and can understand further instructions.
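The published details of those algorithms are beyond the scope of this article, but the general idea can be illustrated with a toy sketch: learn associations between words and features of driven paths from paired examples, then reuse those associations to describe a new path. The feature names, training sentences and scoring rule below are invented for illustration and are not the Purdue team’s actual method.

```python
# A toy sketch (not the Purdue team's actual algorithms) of the general idea:
# learn word/feature associations from (path, sentence) pairs, then reuse those
# associations to describe a new path. All feature names and sentences are invented.

from collections import Counter, defaultdict

# Hypothetical training data: each driven path is summarized as a set of discrete
# features, paired with a human sentence describing that drive.
training = [
    ({"chair", "left"},  "go left past the chair"),
    ({"cone",  "right"}, "go right past the cone"),
    ({"table", "left"},  "go left past the table"),
    ({"chair", "right"}, "go right past the chair"),
]

# Count how often each word co-occurs with each path feature.
cooccur = defaultdict(Counter)
word_totals = Counter()
for features, sentence in training:
    for word in sentence.split():
        word_totals[word] += 1
        for feat in features:
            cooccur[feat][word] += 1

def describe(features):
    """For each feature of a new path, emit the word most strongly tied to it."""
    words = []
    for feat in sorted(features):
        # Prefer words that occur almost exclusively with this feature,
        # breaking ties in favor of more frequent co-occurrence.
        best = max(cooccur[feat],
                   key=lambda w: (cooccur[feat][w] / word_totals[w], cooccur[feat][w]))
        words.append(best)
    return " ".join(words)

# Describe a path the robot has not seen in exactly this combination before.
print(describe({"cone", "left"}))  # e.g. "cone left"
```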

But this language is not just simple commands. In the trials, the camera-equipped robot was driven around an enclosed course and told to avoid obstacles such as a chair, a traffic cone and a table.

The robot learned the words and then generated its own sentences from the visual data it accumulated.

Siskind said the autonomous vehicle’s learning was akin to searching for videos on YouTube, except that instead of matching keywords or text descriptions, the robot searches the visual input from its cameras.

“What we’re doing with our research is actually recognizing what’s going on in the video,” he explained.
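As a loose illustration of what “recognizing what’s going on in the video” could mean in practice, consider checking whether a sentence such as “the robot approached the cone” is actually supported by the positions of objects tracked across video frames, rather than by any keyword or caption. The per-frame detections and the distance-based test for “approach” below are invented assumptions, not the team’s implementation.

```python
# A loose illustration (invented data, not the team's implementation) of scoring a
# sentence against what actually happens in a video, instead of matching keywords.

import math

# Pretend per-frame detections: (x, y) centers of the tracked robot and cone.
frames = [
    {"robot": (0.0, 0.0), "cone": (9.0, 0.0)},
    {"robot": (2.0, 0.5), "cone": (9.0, 0.0)},
    {"robot": (5.0, 0.2), "cone": (9.0, 0.0)},
    {"robot": (8.0, 0.1), "cone": (9.0, 0.0)},
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def approached(track, agent, target):
    """Crude test for the verb 'approach': the agent-target distance shrinks over the clip."""
    gaps = [dist(f[agent], f[target]) for f in track]
    return gaps[-1] < gaps[0] and all(later <= earlier for earlier, later in zip(gaps, gaps[1:]))

# "Search" the footage for the event described by the sentence, not for a keyword.
print(approached(frames, "robot", "cone"))  # True: the robot closes in on the cone
```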

The project was seeded by a grant from the National Robotics Initiative. To further the work, the researchers are looking to continue expanding the machine’s vocabulary.

“We’re looking for collaborators and additional funding to help move this technology forward,” Siskind added.

Bronikowski told Forensic Magazine that he learned he would be taking an early retirement the day after he defended his doctoral thesis at Purdue last year. Now at General Motors, he works on other aspects of autonomous vehicles for the automaker.